Live Demonstration: Retinal ganglion cell software and FPGA implementation for object detection and tracking
This demonstration shows how object detection and
tracking are made possible by a new implementation that
takes inspiration from the visual processing of a particular type
of ganglion cell in the retina.
Retinal ganglion cell software and FPGA model implementation for object detection and tracking
This paper describes the software and FPGA
implementation of a Retinal Ganglion Cell model which detects
moving objects. It is shown how this processing, in conjunction
with a Dynamic Vision Sensor as its input, can be used to
extrapolate information about object position. On the software side, a
system based on an array of these RGCs has been developed in
order to obtain up to two trackers. These can track objects in a
scene observed from a still camera, and are inhibited when saccadic
camera motion occurs. The entire processing takes
1000 ns/event on average. A simplified version of this mechanism, with a mean
latency of 330 ns/event at 50 MHz, has also been implemented in
a Spartan-6 FPGA.
European Commission FP7-ICT-600954; Ministerio de Economía y Competitividad TEC2012-37868-C04-02; Junta de Andalucía P12-TIC-130
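The tracking-with-saccadic-inhibition behavior described in this abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: each DVS event pulls a tracker toward the event's coordinates, and a scene-wide burst of events (as happens during a camera saccade) inhibits the update. All names, gains, and thresholds here are illustrative assumptions.

```python
# Illustrative sketch of an event-driven tracker with saccadic inhibition.
# Not the paper's code: gain, burst_threshold, and window are made-up values.

class EventTracker:
    def __init__(self, x=0.0, y=0.0, gain=0.1):
        self.x, self.y = x, y
        self.gain = gain          # how strongly one event pulls the tracker

    def update(self, ex, ey):
        # Exponential moving average of incoming event coordinates.
        self.x += self.gain * (ex - self.x)
        self.y += self.gain * (ey - self.y)

def track(events, burst_threshold=50, window_us=1000):
    """Feed (timestamp_us, x, y) DVS events to one tracker.

    When too many events land inside a short sliding window (a scene-wide
    burst, as during saccadic camera motion), updates are inhibited.
    """
    tracker = EventTracker()
    recent = []                   # timestamps inside the sliding window
    for t, ex, ey in events:
        recent = [s for s in recent if t - s < window_us]
        recent.append(t)
        if len(recent) >= burst_threshold:
            continue              # burst detected: treat as saccade, inhibit
        tracker.update(ex, ey)
    return tracker.x, tracker.y
```

A real system would run one such tracker per cluster of activity (the paper obtains up to two) and would implement the inhibition with event-rate statistics rather than a Python list.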
Neuromorphic Approach Sensitivity Cell Modeling and FPGA Implementation
Neuromorphic engineering takes inspiration from biology to
solve engineering problems using the organizing principles of biological
neural computation. This field has demonstrated success in sensor-based
applications (vision and audition) as well as in cognition and actuators.
This paper is focused on mimicking an interesting functionality of the
retina that is computed by one type of Retinal Ganglion Cell (RGC):
the early detection of approaching (expanding) dark objects. This
paper presents the software and hardware logic FPGA implementation
of this approach sensitivity cell. It can be used in later cognition layers as
an attention mechanism. The input of this hardware modeled cell comes
from an asynchronous spiking Dynamic Vision Sensor, which leads to an
end-to-end event-based processing system. The software model has been
developed in Java and runs with an average processing time per
event of 370 ns on a NUC embedded computer. The output firing rate
for an approaching object depends on the cell parameters that represent
the needed number of input events to reach the firing threshold. For the
hardware implementation on a Spartan-6 FPGA, the processing time is
reduced to 160 ns/event with the clock running at 50 MHz.
Ministerio de Economía y Competitividad TEC2016-77785-P; Unión Europea FP7-ICT-60095
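The core mechanism this abstract describes, a cell whose firing rate depends on the number of input events needed to reach a firing threshold, can be sketched in a few lines. This is an illustrative assumption of how such an integrate-and-fire counter might look (the original model is in Java); the class name, polarity convention, and threshold are not from the paper.

```python
# Illustrative sketch of an approach-sensitivity cell: integrate DVS events
# that signal darkening inside the receptive field, spike at a threshold.
# Names and parameters are hypothetical, not the paper's implementation.

class ApproachCell:
    def __init__(self, threshold=8):
        self.threshold = threshold  # input events needed per output spike
        self.count = 0

    def on_event(self, is_off_event):
        """is_off_event: True for OFF (darkening) DVS events."""
        if not is_off_event:
            return False            # brightening events do not excite the cell
        self.count += 1
        if self.count >= self.threshold:
            self.count = 0          # reset accumulator after firing
            return True             # emit one output spike
        return False

def firing_count(cell, events):
    """Count output spikes produced by a stream of event polarities."""
    return sum(cell.on_event(p) for p in events)
```

With this structure, a faster-expanding dark object produces OFF events at a higher rate and therefore a higher output firing rate, which is the attention signal the paper feeds to later cognition layers.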
Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics
Taking inspiration from biology to solve engineering problems using the organizing
principles of biological neural computation is the aim of the field of neuromorphic engineering.
This field has demonstrated success in sensor-based applications (vision and audition) as well as in
cognition and actuators. This paper is focused on mimicking the approach-detection functionality
of the retina that is computed by one type of Retinal Ganglion Cell (RGC) and its application to
robotics. These RGCs transmit action potentials when an expanding object is detected. In this work
we compare the software and hardware logic (FPGA) implementations of this approach function, and
measure the hardware latency when applied to robots as an attention/reaction mechanism. The visual
input for these cells comes from an asynchronous event-driven Dynamic Vision Sensor, which leads
to an end-to-end event-based processing system. The software model has been developed in Java
and runs with an average processing time of 370 ns per event on a NUC embedded computer.
The output firing rate for an approaching object depends on the cell parameters that represent the
needed number of input events to reach the firing threshold. For the hardware implementation, on a
Spartan-6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz.
The entropy of the output has been calculated to show that, because of several bio-inspired
characteristics, the system's response to approaching objects is not fully deterministic. It has been
measured that a Summit XL mobile robot can react to an approaching object in 90 ms, which can serve
as an attentional mechanism. This is faster than similar event-based approaches in robotics and
comparable to human reaction latencies to visual stimuli.
Ministerio de Economía y Competitividad TEC2016-77785-P; Comisión Europea FP7-ICT-60095
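The attention/reaction loop measured here (cell spikes in, robot reaction out within 90 ms) can be sketched as a simple rate trigger. This is a hypothetical illustration of the idea, assuming spike timestamps in milliseconds; the window length, spike count, and function name are invented, not the paper's method.

```python
# Illustrative sketch of a reaction trigger: fire a robot reaction (e.g. a
# stop command) once enough approach-cell spikes arrive in a short window.
# window_ms and min_spikes are made-up values, not the paper's parameters.

def reaction_time(spike_times_ms, window_ms=50, min_spikes=3):
    """Return the time (ms) at which a reaction would trigger, or None."""
    recent = []                   # spike timestamps inside the window
    for t in spike_times_ms:
        recent = [s for s in recent if t - s < window_ms]
        recent.append(t)
        if len(recent) >= min_spikes:
            return t              # sustained approach activity: react now
    return None                   # never enough evidence of approach
```

Requiring several spikes in a short window is one simple way to avoid reacting to the occasional spurious spike while still keeping the total latency in the tens of milliseconds.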